
# Language Model Alignment

**Decision Tree Reward Gemma 2 27B** · RLHFlow · License: Other
A decision tree reward model fine-tuned from Gemma-2-27B that scores the quality of content generated by language models; it ranks highly on the RewardBench leaderboard (see the scoring sketch below).
Tags: Large Language Model, Transformers, English · Downloads: 18 · Likes: 6
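This page does not document the model's loading interface, and decision-tree reward models often ship custom code. As a minimal sketch only, the following shows how a sequence-classification reward model is commonly used to score a chat response with `transformers`; the repo id and the standard classification head are assumptions, not confirmed by this listing.

```python
# Sketch: scoring one candidate response with a reward model.
# Assumed (not confirmed by this page): the repo id below, and that the
# model exposes a standard sequence-classification head.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "RLHFlow/Decision-Tree-Reward-Gemma-2-27B"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, trust_remote_code=True
)

chat = [
    {"role": "user", "content": "Explain why the sky is blue."},
    {"role": "assistant", "content": "Sunlight scatters off air molecules..."},
]
input_ids = tokenizer.apply_chat_template(chat, return_tensors="pt")
with torch.no_grad():
    # By the usual convention, a higher score means a better response.
    score = model(input_ids).logits[0]
print(score)
```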
**URM LLaMa 3.1 8B** · LxzGordon
URM-LLaMa-3.1-8B is an uncertainty-aware reward model designed to enhance the alignment of large language models.
Tags: Large Language Model · Downloads: 4,688 · Likes: 10
**Gemma 2 9B It SimPO** · princeton-nlp · License: MIT
A Gemma-2-9B-it model fine-tuned on the gemma2-ultrafeedback-armorm dataset with the SimPO objective for preference optimization.
Tags: Large Language Model, Transformers · Downloads: 21.34k · Likes: 164
**Llama 3 Instruct 8B SimPO** · princeton-nlp
SimPO is a preference optimization method that removes the need for a reference model, simplifying the traditional RLHF pipeline by optimizing the language model directly on preference data (a sketch of the objective follows below).
Tags: Large Language Model, Transformers · Downloads: 1,924 · Likes: 58
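For context on the two SimPO checkpoints above: the SimPO objective scores each response by its length-normalized log-likelihood under the policy being trained (hence no reference model) and applies a logistic loss with a target reward margin. Below is a minimal sketch of that loss following the published formulation; the β and γ values and the toy inputs are illustrative, not the training configuration of these checkpoints.

```python
# Minimal sketch of the SimPO loss (Meng et al., 2024), assuming the summed
# log-probabilities of the chosen/rejected responses are already computed.
import torch
import torch.nn.functional as F

def simpo_loss(chosen_logps, rejected_logps, chosen_len, rejected_len,
               beta=2.0, gamma=1.0):
    """chosen_logps / rejected_logps: summed log-probs of each response
    under the policy; *_len: response lengths in tokens."""
    # Length-normalized implicit rewards; no reference model is involved.
    r_chosen = beta * chosen_logps / chosen_len
    r_rejected = beta * rejected_logps / rejected_len
    # Bradley-Terry-style logistic loss with target reward margin gamma.
    return -F.logsigmoid(r_chosen - r_rejected - gamma).mean()

# Toy usage with a batch of two preference pairs (illustrative numbers):
loss = simpo_loss(torch.tensor([-42.0, -37.5]), torch.tensor([-55.0, -49.0]),
                  torch.tensor([30.0, 25.0]), torch.tensor([33.0, 28.0]))
print(loss)
```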